12 research outputs found

    Reasons for marketing metric importance in Finnish B2B markets - Twin-study approach

    Get PDF
    RESEARCH OBJECTIVES Marketing performance and its economic impact have traditionally been difficult to measure. Marketing has been regarded as a soft science whose performance cannot be quantified into numerical metrics. Marketing performance measurement has nevertheless evolved to include a large set of metrics that make performance comparable across companies and time periods. Although a large number of metrics produces a great deal of information, marketers must be able to identify the marketing performance metrics most relevant to their own business. The objective of this thesis is to identify characteristics of companies and markets in the Finnish B2B sector that affect how important individual metrics are considered. METHODOLOGY The thesis used two exploratory research methods to study B2B marketing performance measurement, as little prior research exists in the area. Combining the methods aimed to map the research area more thoroughly than a single-method study would. First, the area was explored through a case study in which ten key persons from one company in the field were interviewed. The factors identified in the case study were then tested with analysis of variance, using data collected in 2010 in the Stratmark project. The quantitative study tested the effect of the factors observed in the case study on the perceived importance of 41 marketing metrics in Finnish B2B companies. RESULTS In the case study, the interviewees divided markets and companies into those requiring offensive marketing and those requiring defensive marketing, each demanding different marketing means and metrics. Two factors were selected for the quantitative study: the company's relative market position and the life-cycle stage of the market. The analysis of variance showed that the factors had a statistically significant effect on the importance of nine metrics.
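    The variance analysis described above can be illustrated with a minimal sketch. The two-factor design is simplified here to a one-way case, and the metric name and all importance ratings are synthetic, invented for illustration; they do not come from the Stratmark data.

```python
# One-way ANOVA sketch: does the market life-cycle stage affect how
# important a metric (here, a hypothetical "customer retention" rating
# on a 1-7 scale) is considered? All data below is synthetic.

def one_way_anova(groups):
    """Return the F statistic for a list of sample groups."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

# Synthetic importance ratings by market life-cycle stage
growth  = [6, 5, 6, 7, 6]
mature  = [4, 5, 4, 3, 4]
decline = [3, 2, 3, 4, 3]

f_stat = one_way_anova([growth, mature, decline])
print(f"F = {f_stat:.2f}")
```

    A large F relative to the critical value for (2, 12) degrees of freedom would indicate that the life-cycle stage affects how important the metric is rated, which is the kind of effect the thesis tested across 41 metrics.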

    Orchestrating Service Migration for Low Power MEC-Enabled IoT Devices

    Full text link
    Multi-Access Edge Computing (MEC) is a key enabling technology for Fifth Generation (5G) mobile networks. MEC provides distributed cloud computing capabilities and an information technology service environment for applications and services at the edges of mobile networks. This architectural modification serves to reduce congestion and latency and to improve the performance of edge-colocated applications and devices. In this paper, we demonstrate how reactive service migration can be orchestrated for low-power MEC-enabled Internet of Things (IoT) devices. We use open-source Kubernetes as the container orchestration system. Our demo is based on a traditional client-server system running from user equipment (UE) over Long Term Evolution (LTE) to the MEC server. As the use case scenario, we post-process live video received over web real-time communication (WebRTC). Next, we integrate Kubernetes orchestration with S1 handovers, demonstrating a MEC-based software-defined network (SDN). Edge applications may then reactively follow the UE within the radio access network (RAN), enabling low latency. The collected data is used to analyze the benefits of the low-power MEC-enabled IoT device scheme, in which end-to-end (E2E) latency and the power requirements of the UE are improved. We further discuss the challenges of implementing such schemes and future research directions therein.
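    Stripped of the Kubernetes and LTE specifics, the reactive migration step above amounts to relocating the service to the MEC node closest to the UE's new serving eNodeB after a handover. The sketch below shows only that decision logic; the node names, latency map, and `migrate` callback are hypothetical, and a real deployment would issue the move through the Kubernetes API instead.

```python
# Reactive service migration sketch: after an S1 handover, move the
# edge service to the MEC node with the lowest latency to the UE's new
# serving eNodeB. All node names and latency figures are hypothetical.

# Measured (or estimated) latency in ms from each eNodeB to each MEC node
LATENCY_MS = {
    "enb-1": {"mec-a": 2.0, "mec-b": 9.0},
    "enb-2": {"mec-a": 8.5, "mec-b": 2.5},
}

def best_mec_node(serving_enb):
    """Pick the MEC node with the lowest latency to the serving eNodeB."""
    candidates = LATENCY_MS[serving_enb]
    return min(candidates, key=candidates.get)

def on_handover(service, serving_enb, migrate):
    """React to a handover: migrate only if a closer MEC node exists."""
    target = best_mec_node(serving_enb)
    if target != service["node"]:
        migrate(service["name"], target)   # e.g. reschedule the container
        service["node"] = target
    return service["node"]

service = {"name": "video-postproc", "node": "mec-a"}
moves = []
# UE hands over from enb-1 to enb-2: the service should follow to mec-b
on_handover(service, "enb-2", lambda name, node: moves.append((name, node)))
print(service["node"], moves)
```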

    Unleashing GPUs for Network Function Virtualization: an open architecture based on Vulkan and Kubernetes

    Get PDF
    General-purpose computing on graphics processing units (GPGPU) is a promising way to speed up computationally intensive network functions, such as performing real-time traffic classification based on machine learning. Recent studies have focused on integrated graphics units and various performance optimizations to address bottlenecks such as latency. However, these approaches tend to produce architecture-specific binaries and lack the orchestration of functions. A complementary effort would be a GPGPU architecture based on standard and open components, which allows the creation of interoperable and orchestrable network functions. This study describes and evaluates such an open architecture based on the cross-platform Vulkan API, in which we execute handwritten SPIR-V code as a network function. We also demonstrate a multi-node orchestration approach for our proposed architecture using Kubernetes. We validate our architecture by executing SPIR-V code performing traffic classification with random forest inference. We test this application on both discrete and integrated graphics cards and on both x86 and ARM. We find that in all cases the GPUs are faster than the baseline Cython code.
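    To run random-forest inference inside a compute kernel, trees are typically flattened into arrays (feature index, threshold, child offsets) that a shader can traverse without pointers. The sketch below shows that layout in plain Python; the tiny two-tree forest and all its values are invented for illustration, and in the paper's setting the equivalent traversal would be expressed in SPIR-V.

```python
# Flattened random-forest inference, as it might be laid out for a GPU
# compute kernel: each tree is an array of nodes
# (feature, threshold, left_idx, right_idx). A leaf is encoded with
# feature == -1 and its class id stored in "left_idx".
# The two-tree, two-class forest below is a toy example.

TREE_0 = [
    (0, 0.5, 1, 2),     # node 0: if x[0] <= 0.5 go to node 1 else node 2
    (-1, 0.0, 0, 0),    # node 1: leaf, class 0
    (-1, 0.0, 1, 0),    # node 2: leaf, class 1
]
TREE_1 = [
    (1, 1.5, 1, 2),     # node 0: if x[1] <= 1.5 go to node 1 else node 2
    (-1, 0.0, 0, 0),    # node 1: leaf, class 0
    (-1, 0.0, 1, 0),    # node 2: leaf, class 1
]
FOREST = [TREE_0, TREE_1]

def predict(x):
    """Majority vote over all trees, traversing the flattened nodes."""
    votes = [0, 0]
    for tree in FOREST:
        i = 0
        while True:
            feature, threshold, left, right = tree[i]
            if feature == -1:          # leaf: "left" holds the class id
                votes[left] += 1
                break
            i = left if x[feature] <= threshold else right
    return votes.index(max(votes))

print(predict([0.9, 2.0]))  # both trees vote class 1
```

    The same arrays can be uploaded as GPU buffers, so one kernel invocation per packet feature vector classifies traffic without any per-node memory allocation.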

    Latency-optimized edge computing in Fifth Generation (5G) cellular networks

    No full text
    The purpose of this thesis is to research latency-optimized edge computing in 5G cellular networks. Specifically, the research focuses on low-latency software-defined services on open-source software (OSS) core network implementations. A literature review revealed that few OSS implementations of Long Term Evolution (LTE), let alone 5G, core networks exist. It was also found that OSS is essential in research to allow latency optimizations deep in the software layer; such optimizations were found to be difficult or impossible to apply on proprietary systems. As such, to achieve minimal latency in end-to-end (E2E) over-the-air (OTA) testing, an OSS core network was installed at the University of Oulu to operate alongside the existing proprietary one. This thesis concludes that a micro-operator can be run on current OSS LTE core network implementations. Latency-wise, it was found that current LTE modems can achieve an E2E latency of around 15 ms in OTA testing. As a contribution, an OSS infrastructure was installed at the University of Oulu. This infrastructure may serve the needs of academics better than a proprietary one; for example, experimentation with off-specification functionality in core networks should be more accessible, and the installation enables easy addition of arbitrary hardware. This might be useful in research on tailored services through mobile edge computing (MEC) in the micro-operator paradigm. Finally, it is worth noting that the test network at the University of Oulu operates at a rather small scale. Thus, it remains an open question if and how bigger mobile network operators (MNOs) can provide latency-optimized services while balancing throughput and quality of service (QoS).
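    The ~15 ms E2E figure above comes from OTA measurements, but the measuring principle is a simple round-trip probe. A minimal loopback sketch is shown below; over an actual LTE link the client would target the remote endpoint's address instead of localhost, and many probes would be averaged.

```python
# Minimal E2E latency probe: a UDP echo server plus a client that
# measures round-trip time. Run here over loopback for illustration.
import socket
import threading
import time

def echo_server(sock):
    data, addr = sock.recvfrom(1024)
    sock.sendto(data, addr)           # echo the probe back to the sender

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))         # bind to any free port
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2.0)
start = time.perf_counter()
client.sendto(b"probe", ("127.0.0.1", port))
client.recvfrom(1024)
rtt_ms = (time.perf_counter() - start) * 1000
print(f"RTT: {rtt_ms:.3f} ms")        # loopback RTT is far below 15 ms
```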

    Interoperable GPU kernels as latency improver for MEC

    No full text
    Abstract Mixed reality (MR) applications are expected to become common when 5G goes mainstream. However, the latency requirements are challenging to meet due to the resources required by video-based remoting of graphics, in particular video decoding. We propose an approach towards tackling this challenge: a client-server implementation for transacting intermediate representation (IR) between a mobile UE and a MEC server instead of video codecs, thereby avoiding video decoding. We demonstrate the ability to address latency bottlenecks on edge computing workloads that transact graphics. We select SPIR-V compatible GPU kernels as the intermediate representation. Our approach requires know-how in GPU architecture and GPU domain-specific languages (DSLs), but compared to video-based edge graphics, it decreases UE device delay sevenfold. Further, we find that due to low cold-start times on both UEs and MEC servers, application migration can happen in milliseconds. This implies that graphics-based location-aware applications, such as MR, can benefit from this kind of approach.
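    The core idea above, transacting a GPU kernel (IR) once instead of streaming video continuously, can be sketched as a payload comparison. All sizes below are rough illustrative assumptions (a small SPIR-V kernel versus encoded video frames), not measurements from the paper.

```python
# Sketch: cumulative bytes on the wire for two edge-graphics schemes.
# Scheme A (video-based): the server streams encoded frames continuously.
# Scheme B (IR-based): a SPIR-V kernel is transferred once, after which
# only small per-frame parameter updates flow. Sizes are assumptions.

SPIRV_KERNEL_BYTES = 8 * 1024        # small compute kernel (assumed)
PARAM_UPDATE_BYTES = 64              # per-frame uniforms (assumed)
VIDEO_FRAME_BYTES  = 30 * 1024       # one encoded video frame (assumed)

def bytes_video(frames):
    """Bytes transferred by the video-based scheme after N frames."""
    return frames * VIDEO_FRAME_BYTES

def bytes_ir(frames):
    """Bytes transferred by the IR-based scheme after N frames."""
    return SPIRV_KERNEL_BYTES + frames * PARAM_UPDATE_BYTES

# After one second at 60 fps the IR scheme has moved far fewer bytes,
# and the UE performs no video decoding (the latency win in the paper).
frames = 60
print(bytes_video(frames), bytes_ir(frames))
```

    The one-time kernel cost also explains the fast migration: moving the application to another MEC server mainly means re-sending a small IR blob, which fits the millisecond cold-start times reported above.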

    SDN enhanced resource orchestration of containerized edge applications for industrial IoT

    Get PDF
    Abstract With the rise of the Industrial Internet of Things (IIoT), there is intense pressure on resource and performance optimization leveraging existing technologies, such as Software Defined Networking (SDN), edge computing, and container orchestration. Industry 4.0 emphasizes the importance of lean and efficient operations for sustainable manufacturing. Achieving this goal requires engineers to consider all layers of the system, from hardware to software, and to optimize for resource efficiency at every level. This motivates container-based virtualization tools such as Docker and Kubernetes, which offer Platform as a Service (PaaS), while simultaneously leveraging edge technologies to reduce the related latencies. For network management, SDN is poised to offer a cost-effective and dynamically scalable solution by customizing packet handling for various edge applications and services. In this paper, we investigate the energy and latency trade-offs involved in combining these technologies for industrial applications. As a use case, we emulate a 3D-drone-based monitoring system aimed at providing real-time visual monitoring of industrial automation. We compare a native implementation to a containerized implementation in which video processing is orchestrated while streaming is handled by an external UE representing the IIoT device. We compare these two scenarios for energy utilization, latency, and responsiveness. Our test results show that only roughly 16 percent of the total power consumption occurs on the mobile node when orchestrated. Virtualization adds about 4.5 percent to the total power consumption, while the latency difference between the two approaches becomes negligible after the streaming session is initialized.
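    The reported power split can be checked with back-of-the-envelope arithmetic. Only the percentages (about 16 percent on the mobile node, about 4.5 percent virtualization overhead) come from the abstract; the absolute wattage below is an invented placeholder.

```python
# Back-of-the-envelope check of the reported power split in the
# orchestrated scenario. TOTAL_POWER_W is a hypothetical placeholder;
# the two fractions come from the abstract's reported results.
TOTAL_POWER_W = 20.0                  # hypothetical total system draw

mobile_share = 0.16                   # ~16% consumed on the mobile node
virt_overhead = 0.045                 # ~4.5% added by virtualization

mobile_w = TOTAL_POWER_W * mobile_share
edge_w = TOTAL_POWER_W - mobile_w     # remainder handled off the mobile node
virt_w = TOTAL_POWER_W * virt_overhead

print(f"mobile: {mobile_w:.1f} W, edge: {edge_w:.1f} W, "
      f"virt overhead: {virt_w:.2f} W")
```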